Semi-Supervised Training in Deep Learning Acoustic Model
Authors
Abstract
We studied semi-supervised training of a fully connected deep neural network (DNN), an unfolded recurrent neural network (RNN), and a long short-term memory recurrent neural network (LSTM-RNN) with respect to transcription quality, importance-based data sampling, and training data amount. We found that the DNN, unfolded RNN, and LSTM-RNN are increasingly sensitive to labeling errors. For example, with simulated erroneous training transcriptions at the 5%, 10%, or 15% word error rate (WER) level, the semi-supervised DNN yields a 2.37%, 4.84%, or 7.46% relative WER increase against the baseline model trained with perfect transcriptions; in comparison, the corresponding WER increase is 2.53%, 4.89%, or 8.85% for the unfolded RNN and 4.47%, 9.38%, or 14.01% for the LSTM-RNN. We further found that importance sampling has a similar impact on all three models, yielding a 2∼3% relative WER reduction compared to random sampling. Lastly, we compared the modeling capability with increased training data. Experimental results suggested that the LSTM-RNN benefits more from enlarged training data than the unfolded RNN and DNN. We trained a semi-supervised LSTM-RNN using 2,600 hours of transcribed and 10,100 hours of untranscribed data on a mobile speech task. The semi-supervised LSTM-RNN yields a 6.56% relative WER reduction against the supervised baseline.
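To make the data-selection step concrete, the sketch below shows confidence-weighted importance sampling of automatically transcribed utterances. It is a minimal sketch, assuming a seed recognizer that attaches an utterance-level confidence to each 1-best hypothesis; the Utterance fields, the importance_sample function, and the use of confidence as the importance weight are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of confidence-based importance sampling for picking
# untranscribed utterances; all names below are illustrative, not from
# the paper.
import random
from dataclasses import dataclass

@dataclass
class Utterance:
    hypothesis: str    # 1-best transcription from the seed recognizer
    confidence: float  # utterance-level decoder confidence in (0, 1]
    hours: float       # duration in hours

def importance_sample(utts, budget_hours, rng=random):
    """Draw utterances with probability proportional to their decoder
    confidence until the time budget is spent; with uniform weights this
    degenerates to the random-sampling baseline."""
    pool = list(utts)
    selected, total = [], 0.0
    while pool and total < budget_hours:
        u = rng.choices(pool, weights=[p.confidence for p in pool], k=1)[0]
        pool.remove(u)
        selected.append(u)
        total += u.hours
    return selected
```

Under this reading, random sampling corresponds to uniform weights, and the abstract's reported 2∼3% relative WER reduction is the gain from weighting the draw by importance.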
Similar Articles
Exploiting Eigenposteriors for Semi-Supervised Training of DNN Acoustic Models with Sequence Discrimination
Deep neural network (DNN) acoustic models yield posterior probabilities of senone classes. Recent studies support the existence of low-dimensional subspaces underlying senone posteriors. Principal component analysis (PCA) is applied to identify eigenposteriors and perform a low-dimensional projection of the training data posteriors. The resulting enhanced posteriors are applied as soft targets for...
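A minimal numerical sketch of that projection, assuming frame-level senone posteriors arrive as a (frames × senones) NumPy matrix; working in the log domain, the component count, and the row renormalization are assumptions for illustration rather than the paper's exact recipe.

```python
# Sketch: identify "eigenposteriors" via PCA and reconstruct low-rank,
# denoised soft targets from DNN senone posteriors.
import numpy as np

def enhance_posteriors(post, n_components=50, eps=1e-10):
    """post: (frames, senones) matrix of posteriors. Returns posteriors
    projected onto the top principal components and renormalized."""
    logp = np.log(post + eps)                  # assumed log-domain PCA
    mean = logp.mean(axis=0, keepdims=True)
    centered = logp - mean
    # PCA via SVD: rows are frames, columns are senone classes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                  # eigenposteriors
    recon = centered @ basis.T @ basis + mean  # low-rank reconstruction
    enhanced = np.exp(recon)
    return enhanced / enhanced.sum(axis=1, keepdims=True)
```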
Automatic Pronunciation Generation by Utilizing a Semi-Supervised Deep Neural Networks
Phonemic or phonetic sub-word units are the most commonly used atomic elements to represent speech signals in modern ASR systems. However, they are not the optimal choice for several reasons, such as the large amount of effort required to handcraft a pronunciation dictionary, pronunciation variations, human mistakes, and under-resourced dialects and languages. Here, we propose a data-driven pronunciation...
Investigations on ensemble based semi-supervised acoustic model training
Semi-supervised learning has been recognized as an effective way to improve acoustic model training in cases where sufficient transcribed data are not available. Unlike most existing approaches, which use only a single acoustic model and focus on how to refine it, this paper investigates the feasibility of using ensemble methods for semi-supervised acoustic model training. Two methods...
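One way such an ensemble can be used, sketched below, is committee-based data selection: an untranscribed utterance is admitted into training only when several recognizers agree on its transcription. The decode method and the agreement rule are hypothetical stand-ins; the paper's two methods are not spelled out in this snippet.

```python
# Illustrative committee-based selection of untranscribed utterances;
# `recognizer.decode(utt)` returning a 1-best string is an assumption.
def select_by_agreement(utterances, recognizers, min_agree=2):
    """Keep an utterance only if at least `min_agree` recognizers in the
    ensemble produce the same 1-best hypothesis for it."""
    selected = []
    for utt in utterances:
        hyps = [r.decode(utt) for r in recognizers]
        best = max(set(hyps), key=hyps.count)   # most frequent hypothesis
        if hyps.count(best) >= min_agree:
            selected.append((utt, best))        # agreed hypothesis as label
    return selected
```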
Semi-supervised maximum mutual information training of deep neural network acoustic models
Maximum Mutual Information (MMI) is a popular discriminative criterion that has been used in supervised training of acoustic models for automatic speech recognition. However, standard discriminative training is very sensitive to the accuracy of the transcription, and hence its implementation in a semi-supervised setting requires extensive filtering of data. We will show that if the supervision tr...
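For reference, the standard lattice-based MMI objective takes the form below; the notation is the conventional one, not copied from this paper.

```latex
% MMI objective over utterances u with acoustics X_u, reference (or, in the
% semi-supervised case, hypothesized) word sequence W_u, acoustic scale kappa:
\mathcal{F}_{\mathrm{MMI}}(\theta)
  = \sum_{u} \log
    \frac{p_{\theta}(X_u \mid W_u)^{\kappa}\, P(W_u)}
         {\sum_{W} p_{\theta}(X_u \mid W)^{\kappa}\, P(W)}
```

Because the hypothesized W_u sits in the numerator, any recognition errors in it are directly reinforced by the gradient, which is why the snippet stresses sensitivity to transcription accuracy and the need for data filtering.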
Semi-supervised Learning with Sparse Autoencoders in Phone Classification
We propose the application of a semi-supervised learning method to improve the performance of acoustic modelling for automatic speech recognition based on deep neural networks. As opposed to unsupervised initialisation followed by supervised fine tuning, our method takes advantage of both unlabelled and labelled data simultaneously through minibatch stochastic gradient descent. We tested the me...
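The combined objective described there can be sketched as a single loss over labelled and unlabelled minibatches. The sketch below assumes a shared sigmoid encoder feeding a reconstruction decoder, a phone classifier, and a KL sparsity penalty; the architecture, the weights lam and rho, and all names are assumptions, not the paper's configuration.

```python
# Minimal sketch: joint supervised + sparse-autoencoder objective,
# optimized together by minibatch SGD rather than pretrain-then-finetune.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAEClassifier(nn.Module):
    def __init__(self, n_in, n_hidden, n_phones):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)
        self.classifier = nn.Linear(n_hidden, n_phones)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return h, self.decoder(h), self.classifier(h)

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, lam=0.1, rho=0.05):
    """Cross-entropy on labelled frames plus reconstruction and KL
    sparsity penalties on all frames, combined into one objective."""
    x_all = torch.cat([x_lab, x_unlab])
    h, recon, _ = model(x_all)
    _, _, logits = model(x_lab)
    ce = F.cross_entropy(logits, y_lab)
    mse = F.mse_loss(recon, x_all)
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)  # mean unit activation
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return ce + lam * (mse + kl)
```

The design point this illustrates is that unlabelled frames contribute through the reconstruction and sparsity terms in every minibatch, so both data sources shape the shared encoder simultaneously.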